- Title
- Optimal Actor-Critic Policy With Optimized Training Datasets
- Creator
- Banerjee, Chayan; Chen, Zhiyong; Noman, Nasimul; Zamani, Mohsen
- Relation
- IEEE Transactions on Emerging Topics in Computational Intelligence Vol. 6, Issue 6, p. 1324-1334
- Publisher Link
- http://dx.doi.org/10.1109/TETCI.2022.3140375
- Publisher
- Institute of Electrical and Electronics Engineers (IEEE)
- Resource Type
- journal article
- Date
- 2022
- Description
- Actor-critic (AC) algorithms are known for their efficacy and high performance in solving reinforcement learning problems, but they also suffer from low sampling efficiency. An AC-based policy optimization process is iterative and must interact with the environment to evaluate and update the policy: it rolls out the policy, collects rewards and states (i.e., samples), and learns from them, ultimately requiring a huge number of samples to learn an optimal policy. To improve sampling efficiency, we propose a strategy to optimize the training dataset so that it contains far fewer samples collected from the AC process. The dataset optimization consists of a best-episode-only operation, a policy parameter-fitness model, and a genetic algorithm module. The optimal policy network trained on the optimized dataset exhibits superior performance compared to many contemporary AC algorithms in controlling autonomous dynamical systems. Evaluation on standard benchmarks shows that the method improves sampling efficiency, ensures faster convergence to optima, and is more data-efficient than its counterparts.
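The three components named in the abstract (best-episode-only filtering, a parameter-fitness model, and a genetic algorithm) can be sketched in a toy form. This is an illustrative assumption, not the authors' implementation: the inverse-distance surrogate stands in for whatever parameter-fitness model the paper actually learns, and all function names are hypothetical.

```python
import random

def best_episode_only(episodes):
    """Toy best-episode-only filter: keep the highest-return episode
    per policy. Each episode is (policy_id, params, return)."""
    best = {}
    for policy_id, params, ret in episodes:
        if policy_id not in best or ret > best[policy_id][1]:
            best[policy_id] = (params, ret)
    return list(best.values())

def surrogate_fitness(params, dataset):
    """Toy parameter-fitness model: inverse-distance-weighted average
    of returns in the optimized dataset (a stand-in for a learned
    regressor mapping policy parameters to fitness)."""
    num, den = 0.0, 0.0
    for p, r in dataset:
        d = sum((a - b) ** 2 for a, b in zip(params, p)) + 1e-6
        num += r / d
        den += 1.0 / d
    return num / den

def genetic_search(dataset, dim, pop_size=20, generations=30, seed=0):
    """Minimal GA over policy-parameter vectors (dim >= 2 assumed):
    elitist selection, one-point crossover, Gaussian mutation, all
    evaluated against the surrogate instead of environment rollouts."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-1, 1) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=lambda p: surrogate_fitness(p, dataset),
                        reverse=True)
        elite = scored[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(elite):
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(1, dim)          # one-point crossover
            child = [x + rng.gauss(0, 0.05)      # Gaussian mutation
                     for x in a[:cut] + b[cut:]]
            children.append(child)
        pop = elite + children
    return max(pop, key=lambda p: surrogate_fitness(p, dataset))
```

The point of the sketch is that candidate parameters are scored by the cheap surrogate rather than by fresh environment rollouts, which is where the claimed sample-efficiency gain comes from.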
- Subject
- actor critic; reinforcement learning; policy optimization; genetic algorithm; training dataset optimization
- Identifier
- http://hdl.handle.net/1959.13/1468202
- Identifier
- uon:48020
- Identifier
- ISSN:2471-285X
- Language
- eng
- Reviewed